Perplexity (PPL) measures how "surprised" a language model is by a sequence of words; you can think of it as the model's confusion level when predicting the next word. Formally, it is the exponential of the average negative log-probability the model assigned to each token, so a perplexity of k roughly means the model was as uncertain as if it were choosing among k equally likely words. Lower is more confident, higher is more confused.
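To make that concrete, here is a minimal sketch of the calculation in plain Python. It assumes you already have, for each position in the sequence, the probability the model assigned to the word that actually came next (the `token_probs` list below is illustrative, not from any particular model):

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-probability
    the model assigned to each true next token."""
    neg_log_likelihoods = [-math.log(p) for p in token_probs]
    return math.exp(sum(neg_log_likelihoods) / len(neg_log_likelihoods))

# A confident model: it gave each true next word high probability.
print(perplexity([0.9, 0.8, 0.95]))   # ~1.13, barely surprised

# A confused model: the true next words looked unlikely to it.
print(perplexity([0.05, 0.1, 0.02]))  # ~21.5, very surprised
```

The two example calls mirror the ends of the slider: near-certain predictions give a perplexity close to 1, while poor predictions blow it up.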
Use the slider below to see how a model's "cone of possibilities" changes as its perplexity shifts from confident to confused.